Omnidirectional, or 360-degree, cameras capture the entire surrounding space, thus providing an immersive experience when the acquired data is viewed using head-mounted displays. Such an immersive experience inherently generates the illusion of being in a virtual environment. The popularity of 360-degree media has been growing in recent years; however, the large amount of data involved poses several challenges for processing and transmission. To this aim, efforts are being devoted to identifying salient regions that can guide the compression of 360-degree images while preserving the immersive experience. In this contribution, we present a saliency estimation model that accounts for the spherical properties of the images. The proposed approach first divides the 360-degree image into multiple patches that replicate the positions (viewports) looked at by a subject while viewing a 360-degree image using a head-mounted display. Next, a set of low-level features describing various properties of the image scene is extracted from each patch. The extracted features are combined to estimate the 360-degree saliency map. Finally, this estimate is refined to account for the viewing bias induced during image exploration and for illumination variation, yielding the final saliency map. The proposed method is evaluated on a benchmark 360-degree image dataset and compared with two baselines and eight state-of-the-art saliency estimation approaches. The obtained results show that the proposed model outperforms existing saliency estimation models.
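The abstract does not specify how the viewport patches are obtained; a common way to emulate what a head-mounted display shows is to reproject a rectilinear viewport from the equirectangular panorama via an inverse gnomonic projection. The sketch below illustrates this idea only; the viewport centre, field of view, patch size, and nearest-neighbour sampling are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): extract a rectilinear viewport patch
# from an equirectangular 360-degree image using an inverse gnomonic projection.
import numpy as np

def extract_viewport(equi, lon_c, lat_c, fov_deg=90.0, size=256):
    """Sample a size x size viewport centred at (lon_c, lat_c), given in radians.

    equi: equirectangular image as an (H, W, ...) NumPy array.
    Note: the vertical image axis grows downwards, so the latitude sign
    convention may need flipping depending on the dataset.
    """
    H, W = equi.shape[:2]
    f = 0.5 * size / np.tan(0.5 * np.radians(fov_deg))  # focal length in pixels

    # Pixel grid on the tangent (image) plane, centred on the optical axis.
    u, v = np.meshgrid(np.arange(size) - size / 2.0,
                       np.arange(size) - size / 2.0)

    # Inverse gnomonic projection: tangent-plane coordinates -> sphere coordinates.
    rho = np.sqrt(u ** 2 + v ** 2)
    c = np.arctan2(rho, f)
    sin_c, cos_c = np.sin(c), np.cos(c)
    lat = np.arcsin(cos_c * np.sin(lat_c)
                    + np.where(rho > 0, v * sin_c * np.cos(lat_c) / np.maximum(rho, 1e-12), 0.0))
    lon = lon_c + np.arctan2(u * sin_c,
                             rho * np.cos(lat_c) * cos_c - v * np.sin(lat_c) * sin_c)

    # Sphere coordinates -> equirectangular pixel coordinates (nearest neighbour).
    x = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    y = np.clip(((0.5 - lat / np.pi) * H).astype(int), 0, H - 1)
    return equi[y, x]

# Example: tile viewport centres over the sphere, e.g. along the equator.
# patches = [extract_viewport(img, lon, 0.0) for lon in np.linspace(-np.pi, np.pi, 8, endpoint=False)]
```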